© Edward Stull 2018
Edward Stull, UX Fundamentals for Non-UX Professionals, https://doi.org/10.1007/978-1-4842-3811-0_9

9. Stability, Reliability, and Security

Edward Stull, Upper Arlington, Ohio, USA
On September 27, 1997, the USS Yorktown, a Ticonderoga-class missile cruiser, 567 feet long and 34 feet wide, drifted to a silent standstill off the coast of Virginia1 (see Figure 9-1). Moments before, a crew member had hit zero on his keyboard, inadvertently triggering a software bug and a subsequent cascade of system failures, including the loss of the ship’s propulsion. The ship sat dead in the water under the stars for over two hours—a billion dollars’ worth of American might defeated by a single keystroke.
Figure 9-1. USS Yorktown2

The software bug was caused by an avoidable division-by-zero error. Take any number and divide it by zero. You get an undefined value. Mathematicians have dealt with such problems for hundreds of years, going back to at least 17343. However, the Yorktown’s problem went unnoticed until the error rippled throughout the ship’s control center network, destabilizing every connected machine like a seismic sea wave. Systems went offline. Engineers scrambled.
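The failure mode is easy to reproduce in miniature. Below is a minimal sketch in Python, assuming a hypothetical shipboard calculation in which a blank field defaults to zero; the function names and numbers are illustrative, not the Yorktown’s actual software:

from typing import Optional


def minutes_of_fuel_remaining(fuel_gallons: float, burn_rate_gph: float) -> float:
    """Unguarded version: divides by an operator-entered rate."""
    # A blank field coerced to 0 raises ZeroDivisionError here, and an
    # uncaught exception can cascade into every system that depends on it.
    return fuel_gallons / burn_rate_gph * 60


def minutes_of_fuel_remaining_safe(fuel_gallons: float, burn_rate_gph: float) -> Optional[float]:
    """Guarded version: rejects the invalid input at the boundary."""
    if burn_rate_gph <= 0:
        return None  # report "unknown" instead of crashing the caller
    return fuel_gallons / burn_rate_gph * 60


print(minutes_of_fuel_remaining_safe(5000, 0))  # None, not a crash
print(minutes_of_fuel_remaining(5000, 0))       # raises ZeroDivisionError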

We notice stability only in its absence. Stability is invisible. It is the ship that sails. It is the network that performs. It is the app that opens. It is the unrecognized achievement of any error-free experience. Yet, when stability fails, we are left with few options: try again later or never return. Neither is a desirable user experience.

The overall impact of stability issues is difficult to estimate, but a 2013 Cambridge University study4 estimated that software bugs alone cost the global economy $312 billion annually. The same study concluded that 50% of all development time was dedicated to resolving bugs. Such costs create a virtual sea of USS Yorktowns, drowning budgets and sinking projects.

Reliability

Treading alongside stability is reliability. Software Engineering, by Ian Sommerville, states that reliability is “the probability of failure-free operation over a specified time, in a given environment, for a specific purpose.”5 I like this definition because it frames reliability in relative terms. Reliability is relative to a specified time. Consider the website-hosting stalwart of “99% uptime.” That may sound impressive, until you realize that the remaining 1% amounts to nearly 88 hours of downtime a year, which is more than an hour and a half every week. Your personal blog would likely be fine. But, with 99% reliability, Amazon.com would endure a one-billion-dollar loss of net sales. Moreover, reliability is relative to a given environment and purpose. A 99% reliable website host may leave its users disappointed. However, 99% reliable SCUBA equipment would literally leave its users breathless.
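To make the arithmetic concrete, here is a quick Python sketch (the percentages are illustrative) that converts an uptime guarantee into the downtime it actually permits:

HOURS_PER_YEAR = 24 * 365  # 8,760


def downtime_hours_per_year(uptime_percent):
    """Hours of downtime a given uptime percentage still allows each year."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)


for uptime in (99.0, 99.9, 99.99):
    hours = downtime_hours_per_year(uptime)
    print(f"{uptime}% uptime permits {hours:.1f} hours down per year, "
          f"about {hours * 60 / 52:.0f} minutes per week")

# 99.0% uptime permits 87.6 hours down per year, about 101 minutes per week
# 99.9% uptime permits 8.8 hours down per year, about 10 minutes per week
# 99.99% uptime permits 0.9 hours down per year, about 1 minute per week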

Problems are inevitable. Errors happen. Apps crash. Sites time out. We can’t plan for every outcome, but we can anticipate and address common issues:
  • Sudden outage: Use a monitoring service and be the first to know.

  • User frustration: Tweet your awareness of the outage–let users know you know.

  • Checkout errors: Set items to out-of-stock shortly before a planned outage.

  • Search penalties: Configure a 503 server response, which tells visiting bots that the outage is temporary (a minimal sketch follows this list).
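Here is one minimal sketch of that last tactic, assuming a Flask application and a hypothetical MAINTENANCE flag that is switched on during the planned outage:

from flask import Flask, make_response

app = Flask(__name__)
MAINTENANCE = True  # hypothetical flag, flipped on for the planned outage


@app.before_request
def planned_outage():
    """Answer every request with a 503 while maintenance is underway."""
    if MAINTENANCE:
        response = make_response("Down for scheduled maintenance. Back soon.", 503)
        # Retry-After tells crawlers (and patient users) the outage is
        # temporary, so search rankings are not penalized.
        response.headers["Retry-After"] = "3600"  # seconds
        return response


@app.route("/")
def home():
    return "Welcome back!"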

In the end, stability and reliability are not attributes of software, but instead characteristics of an experience. Unstable and unreliable experiences lead to mistrust. Mistrust leads to abandonment. When users abandon, the entire endeavor sinks.

Security

If you placed an overseas call in the 1980s, you may have spoken over the TAT-8 transatlantic cable. It was a first. Never before had fiber optics crossed the Atlantic Ocean. The cable stretched across 3,200 miles of ocean floor, traversing great rift valleys, passing long-forgotten shipwrecks, and weathering undersea storms. TAT-8 was an impressive achievement; yet, it proved to be an insecure one.

Cables started crossing the Atlantic in the mid-1800s, but none were as powerful as TAT-8. The cable could carry thousands of phone calls and millions of data bytes. Some of the earliest Internet messages traveled along it. Despite its role in laying the groundwork for our modern-day communications infrastructure, today we remember TAT-8 more for its curious effect on its surrounding ecosystem: sharks treated the multimillion-dollar cable like a chew toy.

Sharks had swum in the oceans for millennia, but they had likely never encountered anything quite like an undersea fiber optic cable. The vast array of digital communication pumping through TAT-8’s fiber optic veins generated strong electric fields. Sharks detect electric fields to home in on prey through a process known as electroreception. Even in complete darkness, species such as the lemon shark (see Figure 9-2) can trace the faint bioelectric signature of its favorite food, the parrotfish. In retrospect, we should not have been surprised that TAT-8’s power provided such a culinary attraction. Designers soon learned to shield the cables, blocking TAT-8’s electrical fields and securing its data from the powerful jaws of the unwelcome, undersea diners.
Figure 9-2. Lemon shark at the Sydney Aquarium6

When TAT-8 was completed, the securing of data was mechanical. Lines could snap. Connections could break. Sharks could chew on the cable, but they did not try to hack it. The millions of data bytes traveling among the connected academic and banking systems could flow unobstructed, relatively safe from manipulation and malfeasance.

Everything changed in 1988. Using only 99 lines of code, a computer program spread throughout the early Internet. As it replicated itself from machine to machine, the program slowed and crashed networks across the globe. The tiny Morris worm (as it would soon be called) presented a much greater security threat than any 400-pound shark ever could.

In the decades that followed, waves of malware, viruses, and worms exploited both the operating systems of computers and the behaviors of their users.

Today, all digital experiences are prone to attack. Email, texts, chats, payment gateways, validation, data storage, and others are compromised with troubling regularity. PrivacyRights.org reports that over ten billion records have been breached since 2005. The barely perceptible scent of data travels across the Internet to places we may never have previously imagined. Target’s 2013 data breach7 started with stolen credentials from a heating, ventilation, and air conditioning vendor. Nearly 27 million Department of Veterans Affairs records were stolen8 from a laptop taken in a home burglary. Even data disconnected from the Internet can be stolen; Israeli researchers have shown that air-gapped data can be exfiltrated by modulating the sound of a computer’s cooling fan and recording it with a nearby phone.9 Security begets insecurity.

If we realized how insecure our digital experiences are, we might choose to return to the days of telegrams, paper letters, and cash-only transactions. However, even before the digital age, we were not entirely secure. Our lives were beset with wire frauds, postal scams, and strong-arm robberies. We have simply traded analog insecurity for digital insecurity. In many ways, our collective delusion of security is what keeps technology moving forward.

When designing experiences, our first layer of defense is absence. Information absent from your application is inherently secure. One cannot breach information that does not exist. Do you really need to save users’ credit card information? Phone numbers? Postal addresses? User names? Ask yourself, do you even need users to create an account at all? Because, once we obtain information from users, we must treat it like blood in the water.

Visual design and copywriting can connote security to users. We have all visited websites that did not meet our expectations. Perhaps we noticed a misspelling or a missing image. Maybe the website simply made us feel uneasy. Uneasiness leads to fear. Conversely, we have all visited websites that exceeded our expectations. Perhaps we read a witty bit of copywriting or viewed a gorgeous photo. Maybe the website simply made us feel comfortable. Comfort leads to confidence.

Interaction design affects perceptions of security. Simple form validations, such as a clear indicator for strong passwords, enhance perceived security (see Figure 9-3). Ensuring pages provide adequate confirmation and error messaging shows users that the application is aware of what is happening. Consider a typical error message: frequently, an application appears to be just as bewildered by an error as its users are. It is as if a web server said, “Oh my, that was a surprise!” Some error messages may be unavoidable, but we determine their contents. Vague phrasing such as “Something went wrong” does little to assuage the fears of users submitting their credit card details. This is the consequence of creators wishing to show that errors are rare—so rare that errors surprise even them. Stating “Sorry, your card was declined” tells a user exactly what is going on—no mysteries. As creators, we should treat errors as expected realities, not as inexplicable phenomena shared by users, designers, developers, and copywriters alike.
Figure 9-3. Password strength indicator on appleid.apple.com
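Error copy can be as specific as the data the system already holds. A minimal sketch, assuming a hypothetical payment gateway that returns a decline code (the codes and messages below are illustrative, not any particular provider’s API):

# Hypothetical decline codes mapped to copy a person can act on.
DECLINE_MESSAGES = {
    "card_declined": "Sorry, your card was declined. Please try another card.",
    "expired_card": "This card has expired. Please check the expiration date.",
    "insufficient_funds": "Your card was declined due to insufficient funds.",
}

# Even the fallback says what happened and what to do next.
FALLBACK = "We couldn't process your payment. No charge was made; please try again."


def user_facing_error(decline_code):
    """Translate a gateway decline code into a specific, honest message."""
    return DECLINE_MESSAGES.get(decline_code, FALLBACK)


print(user_facing_error("card_declined"))
print(user_facing_error("unknown_code"))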

Lastly, consider the experience of security researchers—the people who uncover vulnerabilities in the products we create. Make it easy for researchers to report their findings; set up a dedicated email address. Be respectful and open-minded about what you hear. No one likes to learn of their own weaknesses. Yet, a tiny indignity received today can save you from a horrific attack suffered tomorrow.
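One widely used convention for publishing that contact point is a security.txt file served from a well-known URL. A minimal sketch in Flask, with a placeholder address and expiry date:

from flask import Flask

app = Flask(__name__)

# Placeholder values; publish your real reporting address and a real expiry date.
SECURITY_TXT = """\
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
"""


@app.route("/.well-known/security.txt")
def security_txt():
    """Publish a standard contact point for security researchers."""
    return SECURITY_TXT, 200, {"Content-Type": "text/plain"}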

Security is not so much the absence of risk, but the confident acceptance of it. Security is a fundamental requirement for any experience. Fear leads to abandonment. Confidence leads to exploration. Users wade into the murky waters of the unknown and discover what lies beneath the surface of our creations.

Key Takeaways

  • All digital experiences can be attacked.

  • Insecurity leads to fear. Fear leads to abandonment.

  • Comfort leads to confidence. Confidence leads to exploration.

  • Do not ask users for unnecessary information.

  • Set up a dedicated email address for security researchers.

Questions to Ask Yourself

  • What more can I do to provide users a safe and secure experience?

  • Do I really need to save user information?

  • Do users really need to create an account?

  • Did I run a spellcheck?

  • Did I correct all obvious visual design bugs, such as broken images?

  • Am I requiring users to follow good security practices, such as creating strong passwords?

  • Have I accounted for all errors that may occur within an experience?

  • How can I make it easy for security researchers to contact me?

  • What if all my users’ private information becomes public?

  • What if I am being hacked right now?
